How and why parents and teachers are introducing young children to AI

The Guardian

Since the release of ChatGPT in late 2022, generative artificial intelligence has trickled down from adults in their offices to university students in campus libraries to teenagers in high school hallways. Now it's reaching the youngest among us, and parents and teachers are grappling with the most responsible way to introduce their under-13s to a new technology that may fundamentally reshape the future. Though the terms of service for ChatGPT, Google's Gemini and other AI models specify that the tools are only meant for those over 13, parents and teachers are taking the matter of AI education into their own hands. Inspired by a story we published on parents who are teaching their children to use AI to set them up for success in school and at work, we asked Guardian readers how and why – or why not – others are doing the same. Though our original story only concerned parents, we have also included teachers in the responses published below, as preparing children for future studies and jobs is one of educators' responsibilities as well.


Robotic Putting Greens. Mixed Reality. Loud Spectators. This Is Golf?!

WIRED

Cameron Young slides a driver from his bag. He stares at a hole referred to as Texas Hill Country. It's new to him--a par 4 with sand hazards and rough to avoid. The 26-year-old is in the top 20 in the Official World Golf Ranking, but he's not sure how to proceed. He turns to his companion, former pro Roberto Castro.


Gaze-Driven Sentence Simplification for Language Learners: Enhancing Comprehension and Readability

Higasa, Taichi, Tanaka, Keitaro, Feng, Qi, Morishima, Shigeo

arXiv.org Artificial Intelligence

Language learners should regularly engage with challenging reading materials as part of their study routine. Nevertheless, constantly referring to dictionaries is time-consuming and distracting. This paper presents a novel gaze-driven sentence simplification system designed to enhance reading comprehension while keeping learners focused on the content. Our system incorporates machine learning models tailored to individual learners, combining eye-gaze features and linguistic features to assess sentence comprehension. When the system identifies comprehension difficulties, it provides simplified versions of sentences by replacing complex vocabulary and grammar with simpler alternatives via GPT-3.5. We conducted an experiment with 19 English learners, collecting data on their eye movements while they read English text. The results demonstrate that our system can accurately estimate sentence-level comprehension. Additionally, we found that GPT-3.5 simplification improved readability, both in terms of traditional readability metrics and individual word difficulty, by paraphrasing across different linguistic levels.
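The abstract above combines eye-gaze features with linguistic features to estimate sentence-level comprehension. As a rough illustration of the gaze-feature side only, here is a minimal sketch; the paper's actual feature set is not listed in the abstract, and the `(duration_ms, word_index)` fixation representation is an assumption made for this example:

```python
def sentence_gaze_features(fixations):
    """Compute simple per-sentence gaze features from a list of fixations.

    fixations: list of (duration_ms, word_index) tuples, in reading order.
    Returns a dict of features of the kind a comprehension classifier
    might consume (illustrative only, not the paper's feature set).
    """
    total = sum(duration for duration, _ in fixations)
    # A "regression" here means a fixation landing on an earlier word
    # than the previous fixation (a common signal of rereading).
    regressions = sum(
        1
        for i in range(1, len(fixations))
        if fixations[i][1] < fixations[i - 1][1]
    )
    return {
        "total_fixation_ms": total,
        "mean_fixation_ms": total / len(fixations) if fixations else 0.0,
        "regression_count": regressions,
        "fixation_count": len(fixations),
    }
```

Features like these would then be concatenated with linguistic features (e.g. sentence length, word rarity) before classification.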


Bridging the Gap: Gaze Events as Interpretable Concepts to Explain Deep Neural Sequence Models

Krakowczyk, Daniel G., Prasse, Paul, Reich, David R., Lapuschkin, Sebastian, Scheffer, Tobias, Jäger, Lena A.

arXiv.org Artificial Intelligence

Recent work in XAI for eye tracking data has evaluated the suitability of feature attribution methods to explain the output of deep neural sequence models for the task of oculomotoric biometric identification. These methods provide saliency maps to highlight important input features of a specific eye gaze sequence. However, to date, the localization analysis of these saliency maps has lacked a quantitative approach across entire datasets. In this work, we employ established gaze event detection algorithms for fixations and saccades and quantitatively evaluate the impact of these events by determining their concept influence. Input features that belong to saccades are shown to be substantially more important than features that belong to fixations. By dissecting saccade events into sub-events, we are able to show that gaze samples that are close to the saccadic peak velocity are most influential. We further investigate the effect of event properties like saccadic amplitude or fixational dispersion on the resulting concept influence.
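If "concept influence" is read as the share of a model's total attribution mass that falls on samples belonging to a given gaze event type, a toy computation looks like the following; that reading is an assumption made here, and the paper's exact formulation may differ:

```python
def concept_influence(attributions, event_labels, event):
    """Fraction of total absolute attribution falling on samples of one event type.

    attributions: per-sample attribution scores from a saliency method.
    event_labels: per-sample event labels (e.g. "fixation" / "saccade").
    event: the event type whose influence we want.

    NOTE: this definition is an assumption for illustration; the paper's
    concept-influence measure may be computed differently.
    """
    total = sum(abs(a) for a in attributions)
    if total == 0:
        return 0.0
    return sum(
        abs(a) for a, e in zip(attributions, event_labels) if e == event
    ) / total
```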


An Extensive Study of User Identification via Eye Movements across Multiple Datasets

Zaidawi, Sahar Mahdie Klim Al, Prinzler, Martin H. U., Lührs, Jonas, Maneth, Sebastian

arXiv.org Artificial Intelligence

Several studies have reported that biometric identification based on eye movement characteristics can be used for authentication. This paper provides an extensive study of user identification via eye movements across multiple datasets, based on an improved version of a method originally proposed by George and Routray. We analyzed our method with respect to several factors that affect identification accuracy, such as the type of stimulus, the IVT parameters (used for segmenting the trajectories into fixations and saccades), the addition of new features such as higher-order derivatives of eye movements, the inclusion of blink information, template aging, age, and gender. We find that three methods, namely selecting optimal IVT parameters, adding higher-order derivative features, and including an additional blink classifier, have a positive impact on identification accuracy. The improvements range from a few percentage points up to an impressive 9% increase on one of the datasets.
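The IVT (velocity-threshold identification, I-VT) segmentation the abstract refers to labels each gaze sample as belonging to a fixation or a saccade based on point-to-point velocity. A minimal sketch follows; the threshold and sampling rate are illustrative defaults, not the parameters tuned in the paper:

```python
def ivt_segment(xs, ys, sampling_rate_hz=1000.0, velocity_threshold=100.0):
    """Label each gaze sample as "fixation" or "saccade" via I-VT.

    xs, ys: gaze coordinates in degrees of visual angle.
    velocity_threshold: deg/s; samples moving faster are labeled saccades.
    The default threshold and sampling rate are illustrative assumptions.
    """
    dt = 1.0 / sampling_rate_hz
    labels = []
    for i in range(len(xs)):
        if i == 0:
            # No preceding sample to compute velocity from; default to fixation.
            labels.append("fixation")
            continue
        dx = xs[i] - xs[i - 1]
        dy = ys[i] - ys[i - 1]
        velocity = (dx * dx + dy * dy) ** 0.5 / dt
        labels.append("saccade" if velocity > velocity_threshold else "fixation")
    return labels
```

In practice the velocity threshold is the key IVT parameter the paper reports optimizing per dataset.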



Learning to Predict Intent from Gaze During Robotic Hand-Eye Coordination

Razin, Yosef (Georgia Institute of Technology) | Feigh, Karen (Georgia Institute of Technology)

AAAI Conferences

Effective human-aware robots should anticipate their user's intentions. During hand-eye coordination tasks, gaze often precedes hand motion and can serve as a powerful predictor of intent. However, cooperative tasks in which a semi-autonomous robot serves as an extension of the human hand have rarely been studied in the context of hand-eye coordination. We hypothesize that accounting for anticipatory eye movements in addition to the movements of the robot will improve intent estimation. This research compares the application of various machine learning methods to intent prediction from gaze tracking data during robotic hand-eye coordination tasks. We found that, with proper feature selection, accuracies exceeding 94% and AUC greater than 91% are achievable with several classification algorithms, but that anticipatory gaze data did not improve intent prediction.
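The abstract compares several classification algorithms on gaze-tracking features without naming them here. As a minimal, self-contained stand-in for that setup, here is a nearest-centroid classifier over hypothetical two-dimensional gaze feature vectors; the feature values and intent labels are invented for illustration and are not from the paper:

```python
class NearestCentroidIntent:
    """Toy nearest-centroid intent classifier.

    Stands in for the (unspecified here) classifiers compared in the paper.
    Feature vectors and labels in the usage example are invented.
    """

    def fit(self, X, y):
        # Average the feature vectors belonging to each intent label.
        sums, counts = {}, {}
        for x, label in zip(X, y):
            acc = sums.setdefault(label, [0.0] * len(x))
            for i, v in enumerate(x):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {
            label: [v / counts[label] for v in acc]
            for label, acc in sums.items()
        }
        return self

    def predict(self, x):
        # Return the label whose centroid is closest in Euclidean distance.
        def dist2(centroid):
            return sum((a - b) ** 2 for a, b in zip(x, centroid))

        return min(self.centroids, key=lambda label: dist2(self.centroids[label]))
```

A real pipeline would replace this with the cross-validated classifiers and gaze-derived features the study evaluates.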